A statistics-driven micro-benchmarking library written in Rust.
This crate is a microbenchmarking library which aims to provide strong statistical confidence in detecting and estimating the size of performance improvements and regressions, while also being easy to use.
See the user guide for examples, as well as details on the measurement and analysis process and the output it produces.
Features:
- Collects detailed statistics, providing strong confidence that changes to performance are real, not measurement noise.
- Produces detailed charts, providing thorough understanding of your code’s performance behavior.
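As a quick orientation, a minimal benchmark target might look like the sketch below; the naive `fibonacci` function is an illustrative placeholder workload:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Placeholder workload; deliberately naive so there is something to measure.
fn fibonacci(n: u64) -> u64 {
    match n {
        0 | 1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn criterion_benchmark(c: &mut Criterion) {
    // black_box hides the constant from the optimizer so the call
    // cannot be folded away at compile time.
    c.bench_function("fib 20", |b| b.iter(|| fibonacci(black_box(20))));
}

criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);
```

A target like this conventionally lives under `benches/`, with `harness = false` set on its `[[bench]]` entry in Cargo.toml so Criterion.rs can supply its own main function.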
Modules
async_executor: This module defines a trait that can be used to plug in different Futures executors into Criterion.rs’ async benchmarking support.
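As a hedged sketch of how an executor is plugged in, the following assumes the `async_futures` cargo feature, which provides the `FuturesExecutor` used here (executors for tokio, async-std, and smol sit behind similar feature flags):

```rust
use criterion::async_executor::FuturesExecutor;
use criterion::{criterion_group, criterion_main, Criterion};

// Stand-in async workload.
async fn fetch_answer() -> u64 {
    42
}

fn bench_async(c: &mut Criterion) {
    c.bench_function("fetch_answer", |b| {
        // to_async wraps the Bencher in an AsyncBencher driven by the chosen executor.
        b.to_async(FuturesExecutor).iter(|| fetch_answer())
    });
}

criterion_group!(benches, bench_async);
criterion_main!(benches);
```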
measurement: This module defines a set of traits that can be used to plug different measurements (e.g. Unix’s Processor Time, CPU or GPU performance counters, etc.) into Criterion.rs. It also includes the WallTime struct which defines the default wall-clock time measurement.
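As a rough sketch of what a custom measurement looks like, the following mirrors the "half seconds" example from the user guide; `HalfSeconds` and `HalfSecFormatter` are illustrative names, and the trait signatures should be checked against the `measurement` module docs:

```rust
use criterion::measurement::{Measurement, ValueFormatter};
use criterion::Throughput;
use std::time::{Duration, Instant};

// Illustrative measurement: wall-clock time reported in half-seconds.
struct HalfSeconds;

struct HalfSecFormatter;
impl ValueFormatter for HalfSecFormatter {
    fn scale_values(&self, _typical: f64, _values: &mut [f64]) -> &'static str {
        "hs" // values are already in half-seconds; just supply the unit label
    }
    fn scale_throughputs(&self, _typical: f64, _t: &Throughput, _values: &mut [f64]) -> &'static str {
        "units/hs"
    }
    fn scale_for_machines(&self, _values: &mut [f64]) -> &'static str {
        "hs"
    }
}

impl Measurement for HalfSeconds {
    type Intermediate = Instant; // captured when the measurement starts
    type Value = Duration;       // accumulated elapsed time

    fn start(&self) -> Self::Intermediate {
        Instant::now()
    }
    fn end(&self, i: Self::Intermediate) -> Self::Value {
        i.elapsed()
    }
    fn add(&self, v1: &Self::Value, v2: &Self::Value) -> Self::Value {
        *v1 + *v2
    }
    fn zero(&self) -> Self::Value {
        Duration::from_secs(0)
    }
    fn to_f64(&self, value: &Self::Value) -> f64 {
        value.as_secs_f64() * 2.0 // convert seconds to half-seconds
    }
    fn formatter(&self) -> &dyn ValueFormatter {
        &HalfSecFormatter
    }
}
```

A measurement like this is plugged in with `Criterion::default().with_measurement(HalfSeconds)`, and the benchmark functions then take `&mut Criterion<HalfSeconds>`.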
profiler: This module provides an extension trait which allows in-process profilers to be hooked into the --profile-time argument at compile-time. Users of out-of-process profilers such as perf don’t need to do anything special.
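A minimal skeleton of this extension point, assuming only the two-method `Profiler` trait; `MyProfiler` is a placeholder where a real in-process profiler such as pprof would be started and stopped:

```rust
use criterion::profiler::Profiler;
use std::path::Path;

// Illustrative no-op profiler skeleton.
struct MyProfiler;

impl Profiler for MyProfiler {
    fn start_profiling(&mut self, benchmark_id: &str, benchmark_dir: &Path) {
        eprintln!("start profiling {} into {}", benchmark_id, benchmark_dir.display());
    }
    fn stop_profiling(&mut self, benchmark_id: &str, benchmark_dir: &Path) {
        eprintln!("stop profiling {} in {}", benchmark_id, benchmark_dir.display());
    }
}
```

It is hooked in with `Criterion::default().with_profiler(MyProfiler)`, after which `--profile-time` iterates the benchmarks under the profiler instead of analyzing them.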
Macros
criterion_group: Macro used to define a function group for the benchmark harness; see the criterion_main! macro for more details.
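Besides the flat form shown in the quick-start sketch above, `criterion_group!` also accepts an expanded form carrying a custom configuration; the `sample_size` value here is arbitrary:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn bench_a(c: &mut Criterion) {
    c.bench_function("add", |b| b.iter(|| black_box(1) + black_box(1)));
}

criterion_group! {
    name = benches;
    // Illustrative configuration: more samples than the default for tighter estimates.
    config = Criterion::default().sample_size(200);
    targets = bench_a
}
criterion_main!(benches);
```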
criterion_main: Macro which expands to a benchmark harness.
Structs
AsyncBencher: Async/await variant of the Bencher struct.
Bencher: Timer struct used to iterate a benchmarked function and measure the runtime.
BenchmarkGroup: Structure used to group together a set of related benchmarks, along with custom configuration settings for the group. All benchmarks performed using a benchmark group will be grouped together in the final report.
BenchmarkId: Simple structure representing an ID for a benchmark. The ID must be unique within a benchmark group.
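A sketch of how a group with parameterized IDs might be set up; the names `vec_init`, `from_elem`, and `collect` are illustrative:

```rust
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};

fn bench_compare(c: &mut Criterion) {
    let mut group = c.benchmark_group("vec_init");
    for &size in &[1_000usize, 10_000, 100_000] {
        // BenchmarkId::new combines a function name with a parameter,
        // keeping IDs unique within the group.
        group.bench_with_input(BenchmarkId::new("from_elem", size), &size, |b, &s| {
            b.iter(|| vec![0u8; s]);
        });
        group.bench_with_input(BenchmarkId::new("collect", size), &size, |b, &s| {
            b.iter(|| (0..s).map(|_| 0u8).collect::<Vec<_>>());
        });
    }
    group.finish();
}

criterion_group!(benches, bench_compare);
criterion_main!(benches);
```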
Criterion: The benchmark manager.
PlotConfiguration: Contains the configuration options for the plots generated by a particular benchmark or benchmark group.
Enums
AxisScale: Axis scaling type.
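For example, pairing PlotConfiguration (listed under Structs above) with `AxisScale::Logarithmic` helps when inputs span several orders of magnitude; a sketch:

```rust
use criterion::{criterion_group, criterion_main, AxisScale, BenchmarkId, Criterion, PlotConfiguration};

fn bench_log_scale(c: &mut Criterion) {
    let mut group = c.benchmark_group("growth");
    // A logarithmic axis keeps the summary plot readable when the
    // measured times differ by orders of magnitude.
    group.plot_config(PlotConfiguration::default().summary_scale(AxisScale::Logarithmic));
    for &size in &[10usize, 1_000, 100_000] {
        group.bench_function(BenchmarkId::from_parameter(size), |b| {
            b.iter(|| (0..size).sum::<usize>())
        });
    }
    group.finish();
}

criterion_group!(benches, bench_log_scale);
criterion_main!(benches);
```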
Baseline: Describes how the baseline_directory is handled.
BatchSize: Argument to Bencher::iter_batched and Bencher::iter_batched_ref which controls the batch size.
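A sketch of `iter_batched` with `BatchSize::SmallInput`, useful when each iteration must consume a fresh input whose construction should not be timed:

```rust
use criterion::{criterion_group, criterion_main, BatchSize, Criterion};

fn bench_sort(c: &mut Criterion) {
    c.bench_function("sort reversed 1k", |b| {
        b.iter_batched(
            // Setup runs outside the measurement: build a fresh, unsorted input.
            || (0..1_000u32).rev().collect::<Vec<_>>(),
            // Routine consumes the input; only this closure is timed.
            |mut v| {
                v.sort();
                v
            },
            // SmallInput indicates the inputs are cheap to hold in memory,
            // letting Criterion.rs prepare large batches between setups.
            BatchSize::SmallInput,
        )
    });
}

criterion_group!(benches, bench_sort);
criterion_main!(benches);
```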
PlottingBackend: Enum used to select the plotting backend.
SamplingMode: This enum allows the user to control how Criterion.rs chooses the iteration count when sampling. The default is Auto, which will choose a method automatically based on the iteration time during the warm-up phase.
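A sketch of opting into flat sampling for a long-running benchmark; the 10 ms sleep is a stand-in workload:

```rust
use criterion::{criterion_group, criterion_main, Criterion, SamplingMode};
use std::thread::sleep;
use std::time::Duration;

fn bench_slow(c: &mut Criterion) {
    let mut group = c.benchmark_group("slow_ops");
    // Flat sampling uses the same iteration count for every sample,
    // which suits benchmarks where each iteration is expensive.
    group.sampling_mode(SamplingMode::Flat);
    group.bench_function("sleep 10ms", |b| b.iter(|| sleep(Duration::from_millis(10))));
    group.finish();
}

criterion_group!(benches, bench_slow);
criterion_main!(benches);
```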
Throughput: Enum representing different ways of measuring the throughput of benchmarked code. If the throughput setting is configured for a benchmark then the estimated throughput will be reported as well as the time per iteration.
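A sketch of reporting byte throughput; the XOR fold is a stand-in for a real checksum:

```rust
use criterion::{criterion_group, criterion_main, Criterion, Throughput};

fn bench_throughput(c: &mut Criterion) {
    let mut group = c.benchmark_group("checksum");
    let data = vec![0xABu8; 1024 * 1024];
    // Declaring the number of bytes processed per iteration makes the
    // report include a bytes-per-second figure alongside the time.
    group.throughput(Throughput::Bytes(data.len() as u64));
    group.bench_function("xor_fold", |b| {
        b.iter(|| data.iter().fold(0u8, |acc, &x| acc ^ x))
    });
    group.finish();
}

criterion_group!(benches, bench_throughput);
criterion_main!(benches);
```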
Functions
black_box: A function that is opaque to the optimizer, used to prevent the compiler from optimizing away computations in a benchmark.
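For example, a minimal sketch:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn bench_sum(c: &mut Criterion) {
    c.bench_function("sum 0..n", |b| {
        b.iter(|| {
            // Hide the bound from the optimizer so the sum cannot be
            // precomputed at compile time.
            let n = black_box(100u64);
            (0..n).sum::<u64>()
        })
    });
}

criterion_group!(benches, bench_sum);
criterion_main!(benches);
```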